This paper addresses the problem of dynamic online estimation and compensation of the hard-iron and soft-iron biases of 3-axis magnetometers under dynamic motion in field robotics, utilizing only biased measurements from a 3-axis magnetometer and a 3-axis angular rate sensor. The proposed magnetometer and angular velocity bias estimator (MAVBE) employs a 15-state process model encoding the nonlinear process dynamics of the magnetometer signal subject to angular velocity excursions, while simultaneously estimating 9 magnetometer bias parameters and 3 angular rate sensor bias parameters within an extended Kalman filter framework. Bias parameter local observability is numerically evaluated. The bias-compensated signals, together with 3-axis accelerometer signals, are used to estimate the bias-compensated magnetic geodetic heading. Performance of the proposed MAVBE method is evaluated in numerical simulations, laboratory experiments, and full-scale field trials in Chesapeake Bay, MD. For the proposed MAVBE, (i) instrument attitude is not required to estimate the biases, and the results show that (ii) the biases are locally observable, (iii) the bias estimates converge rapidly to the true bias parameters, (iv) only modest instrument excitation is required for bias estimate convergence, and (v) compensation for the magnetometer hard-iron and soft-iron biases significantly improves dynamic heading estimation accuracy.
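As a rough illustration of the hard-iron/soft-iron measurement model behind the 9 magnetometer bias parameters (a symmetric 3×3 soft-iron matrix, 6 parameters, plus a 3-vector hard-iron offset), here is a minimal compensation sketch in Python. The numerical values are hypothetical, and this shows only the compensation step, not the MAVBE estimator itself:

```python
import numpy as np

# Hypothetical measurement model: m_meas = S @ m_true + h, where
# S is the symmetric 3x3 soft-iron distortion matrix (6 parameters)
# and h is the 3x1 hard-iron offset (3 parameters).
S = np.array([[1.10, 0.02, 0.00],
              [0.02, 0.95, 0.01],
              [0.00, 0.01, 1.05]])   # soft-iron (assumed known here)
h = np.array([0.20, -0.10, 0.05])    # hard-iron (assumed known here)

def compensate(m_meas, S, h):
    """Invert the distortion model to recover the true field vector."""
    return np.linalg.solve(S, m_meas - h)

m_true = np.array([0.3, -0.1, 0.4])  # arbitrary ambient field
m_meas = S @ m_true + h              # simulated biased measurement
m_hat = compensate(m_meas, S, h)
print(np.allclose(m_hat, m_true))    # compensation is exact when biases are known
```

In MAVBE these quantities are not known a priori; they are among the states the extended Kalman filter estimates online during vehicle motion.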
This paper addresses the long-standing open problem of observability in adaptive identification (AID) of second-order nonlinear models of 6-degree-of-freedom (6-DOF) rigid-body dynamical systems subject to externally applied forces and moments. Although stable methods for AID of the plant parameters of such systems have been reported, as has stable direct adaptive trajectory-tracking control of such systems, these studies have been unable to show analytically that the adaptive parameter estimates converge to the true plant parameter values. This paper reports necessary and sufficient conditions for uniform complete observability (UCO) of the 6-DOF plant inertial parameters for a stable adaptive identifier for this class of systems. When the UCO condition is met, the adaptive parameter estimates are shown to converge to the true plant parameter values. To the best of our knowledge, this is the first reported proof of UCO of the plant parameters for this class of systems, and of the corresponding convergence of the adaptive parameter estimates to the true parameter values. We also report a numerical simulation study of this AID approach which shows that (a) the UCO condition can be met for fully-actuated plants as well as underactuated plants with the proper choice of control input and (b) the adaptive parameter estimates converge to the true parameter values. We conjecture that this approach can be extended to include other parameters appearing in rigid-body plant models, including parameters for drag, buoyancy, added mass, bias, and actuators.
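To make the role of excitation in parameter convergence concrete, the following toy sketch runs a gradient (normalized LMS) identifier on a scalar discrete-time plant. It is an illustrative analogue only, not the paper's 6-DOF continuous-time formulation, and all gains and signals are hypothetical choices:

```python
import numpy as np

# Unknown plant x[k+1] = a*x[k] + b*u[k]; a normalized-LMS identifier
# converges to the true (a, b) when the input is persistently exciting.
a_true, b_true = 0.8, 0.5
theta_true = np.array([a_true, b_true])
theta_hat = np.zeros(2)                      # initial parameter estimates
gamma = 0.5                                  # adaptation gain

x = 0.0
for k in range(5000):
    u = np.sin(0.5 * k) + np.sin(1.3 * k)    # two tones -> persistent excitation
    phi = np.array([x, u])                   # regressor
    x_next = a_true * x + b_true * u         # plant response (noise-free)
    e = x_next - phi @ theta_hat             # one-step prediction error
    theta_hat += gamma * e * phi / (1.0 + phi @ phi)
    x = x_next

print(np.round(theta_hat, 3))                # approaches [0.8, 0.5]
```

With a persistently exciting input the estimates converge to the true parameters; with insufficient excitation they generally would not, mirroring the role the UCO condition plays in the paper's analysis.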
Speech neuroprostheses have the potential to provide a means of communication for people with impaired or lost speech. Recent advances have demonstrated high-quality text decoding and speech synthesis from electrocorticographic grids placed on the cortical surface. Here, we investigate a less invasive measurement modality, namely stereotactic electroencephalography (SEEG), which provides sparse sampling from multiple brain regions, including subcortical regions. To evaluate whether SEEG can also be used to synthesize high-quality audio from neural recordings, we employ a recurrent encoder-decoder framework based on modern deep learning methods. We demonstrate that, despite limited training data, high-quality speech can be reconstructed from these minimally invasive recordings. Finally, we utilize variational feature dropout to successfully identify the most informative electrode contacts.
Recent work has shown the benefits of synthetic data for use in computer vision, with applications ranging from autonomous driving to face landmark detection and reconstruction. There are a number of benefits to using synthetic data, from privacy preservation and bias elimination to the quality and feasibility of annotation. Generating human-centered synthetic data is a particular challenge in terms of realism and domain gap, though recent work has shown that effective machine learning models can be trained using synthetic face data alone. We show that this can be extended to include the full body by building on the pipeline of Wood et al. to generate synthetic images of humans in their entirety, with ground-truth annotations for computer vision applications. In this report we describe how we construct a parametric model of the face and body, including articulated hands; our rendering pipeline to generate realistic images of humans based on this body model; an approach for training DNNs to regress a dense set of landmarks covering the entire body; and a method for fitting our body model to dense landmarks predicted from multiple views.
While the brain connectivity network can inform the understanding and diagnosis of developmental dyslexia, its cause-effect relationships have not yet been sufficiently examined. Employing electroencephalography signals and a band-limited white noise stimulus at 4.8 Hz (prosodic-syllabic frequency), we measure the phase Granger causalities among channels to identify differences between dyslexic learners and controls, thereby proposing a method to calculate directional connectivity. As causal relationships run in both directions, we explore three scenarios, namely channels' activity as sources, as sinks, and in total. Our proposed method can be used for both classification and exploratory analysis. In all scenarios, we find confirmation of the established right-lateralized Theta sampling network anomaly, in line with the temporal sampling framework's assumption of oscillatory differences in the Theta and Gamma bands. Further, we show that this anomaly primarily occurs in the causal relationships of channels acting as sinks, where it is significantly more pronounced than when only total activity is observed. In the sink scenario, our classifier obtains 0.84 and 0.88 accuracy and 0.87 and 0.93 AUC for the Theta and Gamma bands, respectively.
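The paper's directional connectivity rests on phase Granger causality between EEG channels. As a generic sketch of the underlying idea (does the past of one signal reduce the prediction error of another?), the following compares restricted and full autoregressive fits. The model order, decision threshold, and test signals are hypothetical, and this omits the band-limited phase extraction used in the paper:

```python
import numpy as np

def _lags(s, order):
    """Design matrix of lagged values s[t-1], ..., s[t-order]."""
    n = len(s)
    return np.column_stack([s[order - L:n - L] for L in range(1, order + 1)])

def _rss(X, Y):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    r = Y - X @ beta
    return r @ r

def granger_causes(x, y, order=2, thresh=0.1):
    """Does the past of x improve prediction of y beyond y's own past?"""
    Y = y[order:]
    Xr = _lags(y, order)                      # restricted: y's own past
    Xf = np.hstack([Xr, _lags(x, order)])     # full: plus x's past
    gci = np.log(_rss(Xr, Y) / _rss(Xf, Y))   # Granger causality index
    return gci > thresh

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                 # exogenous driver
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.5 * rng.standard_normal()

print(granger_causes(x, y))  # x drives y
print(granger_causes(y, x))  # x is exogenous noise: no causality back
```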
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
Differentiable Architecture Search (DARTS) has attracted considerable attention as a gradient-based Neural Architecture Search (NAS) method. Since the introduction of DARTS, there has been little work done on adapting the action space based on state-of-the-art architecture design principles for CNNs. In this work, we aim to address this gap by incrementally augmenting the DARTS search space with micro-design changes inspired by ConvNeXt and studying the trade-off between accuracy, evaluation layer count, and computational cost. To this end, we introduce the Pseudo-Inverted Bottleneck conv block, intended to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt. Our proposed architecture is much less sensitive to evaluation layer count and significantly outperforms a DARTS network of similar size at layer counts as small as 2. Furthermore, with fewer layers, not only does it achieve higher accuracy with lower GMACs and parameter count, but GradCAM comparisons also show that our network is able to better detect distinctive features of target objects compared to DARTS.
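For context, DARTS makes the discrete choice of operation on each edge of the cell differentiable by replacing it with a softmax-weighted mixture of all candidate operations. A minimal 1-D sketch, with toy operations and hypothetical architecture weights `alpha`:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Toy 1-D stand-ins for candidate operations on one edge of a DARTS cell.
ops = [
    lambda x: np.zeros_like(x),                             # 'none'
    lambda x: x,                                            # skip-connect
    lambda x: np.convolve(x, np.ones(3) / 3, mode="same"),  # 3-tap "conv"
]

def mixed_op(x, alpha):
    """DARTS continuous relaxation: softmax-weighted sum of candidate ops."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.array([1.0, 2.0, 3.0, 4.0])
alpha = np.array([0.0, 5.0, 0.0])   # strongly favors the skip-connect
print(np.round(mixed_op(x, alpha), 2))
```

After search, DARTS discretizes by keeping the argmax operation per edge; search-space modifications like those studied here change the candidate set in `ops`, not this relaxation mechanism.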
Meta Learning automates the search for learning algorithms. At the same time, it creates a dependency on human engineering on the meta-level, where meta learning algorithms need to be designed. In this paper, we investigate self-referential meta learning systems that modify themselves without the need for explicit meta optimization. We discuss the relationship of such systems to in-context and memory-based meta learning and show that self-referential neural networks require functionality to be reused in the form of parameter sharing. Finally, we propose fitness monotonic execution (FME), a simple approach to avoid explicit meta optimization. A neural network self-modifies to solve bandit and classic control tasks, improves its self-modifications, and learns how to learn, purely by assigning more computational resources to better performing solutions.
There are two important things in science: (A) Finding answers to given questions, and (B) Coming up with good questions. Our artificial scientists not only learn to answer given questions, but also continually invent new questions, by proposing hypotheses to be verified or falsified through potentially complex and time-consuming experiments, including thought experiments akin to those of mathematicians. While an artificial scientist expands its knowledge, it remains biased towards the simplest, least costly experiments that still have surprising outcomes, until they become boring. We present an empirical analysis of the automatic generation of interesting experiments. In the first setting, we investigate self-invented experiments in a reinforcement-providing environment and show that they lead to effective exploration. In the second setting, pure thought experiments are implemented as the weights of recurrent neural networks generated by a neural experiment generator. Initially interesting thought experiments may become boring over time.
The ability to convert reciprocating, i.e., alternating, actuation into rotary motion using linkages is hindered fundamentally by their poor torque transmission capability around kinematic singularity configurations. Here, we harness the elastic potential energy of a linear spring attached to the coupler link of four-bar mechanisms to manipulate force transmission around the kinematic singularities. We developed a theoretical model to explore the parameter space for proper force transmission in slider-crank and rocker-crank four-bar kinematics. Finally, we verified the proposed model and methodology by building and testing a macro-scale prototype of a slider-crank mechanism. We expect this approach to enable the development of small-scale rotary engines and robotic devices with closed kinematic chains dealing with serial kinematic singularities, such as linkages and parallel manipulators.
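The kinematic-singularity problem targeted here can be seen in the plain slider-crank: by virtual work, the crank torque available from a slider force is proportional to dx/dθ, which vanishes at the dead-center configurations θ = 0 and θ = π. A small sketch with hypothetical link lengths (this is the baseline mechanism, without the spring-assisted modification proposed in the paper):

```python
import numpy as np

def slider_position(theta, r=1.0, l=3.0):
    """Slider displacement of a slider-crank with crank r and rod l:
    x(theta) = r*cos(theta) + sqrt(l^2 - r^2*sin(theta)^2)."""
    return r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)

def transmission_ratio(theta, r=1.0, l=3.0):
    """dx/dtheta: by virtual work, crank torque = slider force * dx/dtheta.
    Where this vanishes (dead centers theta = 0, pi), no torque transmits."""
    s = np.sin(theta)
    return -r * s - (r**2 * s * np.cos(theta)) / np.sqrt(l**2 - (r * s)**2)

print(abs(transmission_ratio(0.0)) < 1e-12)      # dead-center singularity
print(abs(transmission_ratio(np.pi / 2)) > 0.5)  # good transmission mid-stroke
```

Storing elastic energy in a spring on the coupler, as the paper proposes, is one way to carry the mechanism through these zero-transmission configurations.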